Goal-Oriented Sensitivity Analysis of Hyperparameters in Deep Learning

Authors

Abstract

Tackling new machine learning problems with neural networks always means optimizing numerous hyperparameters that define their structure and strongly impact their performance. In this work, we study the use of goal-oriented sensitivity analysis, based on the Hilbert–Schmidt independence criterion (HSIC), for hyperparameter analysis and optimization. Hyperparameters live in spaces that are often complex and awkward: they can be of different natures (categorical, discrete, boolean, continuous), interact, and have inter-dependencies, all of which makes it non-trivial to perform classical sensitivity analysis. We alleviate these difficulties to obtain a robust index that is able to quantify each hyperparameter's relative impact on the network's final error. This valuable tool allows us to better understand hyperparameters and to make hyperparameter optimization more interpretable. We illustrate the benefits of this knowledge in the context of hyperparameter optimization and derive an HSIC-based optimization algorithm that we apply to MNIST and Cifar, classical machine learning data sets, but also to the approximation of the Runge function and of the Bateman equations solution, of interest for scientific machine learning. The method yields neural networks that are both competitive and cost-effective.
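To make the idea concrete, here is a minimal, self-contained sketch (not the authors' code) of a goal-oriented HSIC index: hyperparameter configurations are sampled, each configuration's validation error is reduced to the indicator "is this run among the best 10%?", and a normalized HSIC between each hyperparameter and that indicator is reported. The kernel choices, the 10% threshold, and the synthetic error model below are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): goal-oriented HSIC sensitivity of
# hyperparameters. Assumptions: Gaussian kernel for continuous/discrete
# hyperparameters, a 0/1 kernel for categorical ones, and the goal variable
# Z = 1{validation error among the best 10%}.
import numpy as np

def gaussian_kernel(x, bandwidth=None):
    """Gram matrix of a Gaussian kernel; bandwidth defaults to the median heuristic."""
    d2 = (x[:, None] - x[None, :]) ** 2
    if bandwidth is None:
        bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2.0) + 1e-12
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def delta_kernel(x):
    """Gram matrix of the 0/1 kernel for categorical values."""
    return (x[:, None] == x[None, :]).astype(float)

def hsic(K, L):
    """Biased empirical HSIC estimator: tr(K H L H) / n^2."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

def normalized_hsic(K, L):
    """HSIC normalized to [0, 1], a kernelized correlation."""
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

# --- toy usage -------------------------------------------------------------
rng = np.random.default_rng(0)
n = 200
log_lr = rng.uniform(-5, -1, n)            # continuous hyperparameter
n_layers = rng.integers(1, 6, n)           # discrete hyperparameter
activation = rng.integers(0, 3, n)         # categorical hyperparameter (3 choices)

# Stand-in for the validation error of each trained network (synthetic here:
# strongly driven by the learning rate, weakly by depth, not by activation).
error = (log_lr + 3.0) ** 2 + 0.1 * n_layers + 0.05 * rng.normal(size=n)

# Goal-oriented transform: keep only "is this run among the best 10%?"
z = (error <= np.quantile(error, 0.10)).astype(float)
L = delta_kernel(z)

for name, K in [("log_lr", gaussian_kernel(log_lr)),
                ("n_layers", gaussian_kernel(n_layers.astype(float))),
                ("activation", delta_kernel(activation))]:
    print(f"{name:>10s}: normalized HSIC = {normalized_hsic(K, L):.3f}")
```

Conditioning on a goal indicator rather than on the raw error is what makes the index answer "which hyperparameters matter for reaching a good error", which is the reading the abstract gives to HSIC in this setting.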

Similar references

Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using ...
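As a point of reference for the excerpt above, the following hedged sketch shows the baseline it discusses: a finite-difference estimate of the derivative of a mean observable of a toy stochastic (birth-death) simulator, contrasting independent sampling with the simplest possible coupling, common random numbers. The paper's contribution is a different, more elaborate class of couplings for lattice KMC; everything below is illustrative.

```python
# Hedged sketch (not the paper's coupling construction): a finite-difference
# estimate of d E[f(X_theta)] / d theta for a toy stochastic model. "Coupling"
# is illustrated only in its simplest form, common random numbers.
import numpy as np

def simulate_population(theta, horizon=5.0, rng=None):
    """Toy Gillespie-style birth-death process; returns the final population."""
    rng = rng or np.random.default_rng()
    n, t = 10, 0.0
    while t < horizon and n > 0:
        birth, death = theta * n, 0.4 * n
        total = birth + death
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < birth / total else -1
    return n

def fd_sensitivity(theta, h=0.05, samples=2000, couple=True):
    """Central finite difference of the sample means at theta +/- h."""
    up, down = [], []
    for s in range(samples):
        # Coupling via a shared seed reuses the same randomness at both
        # perturbed parameters, which cuts the variance of the difference.
        r1 = np.random.default_rng(s)
        r2 = np.random.default_rng(s if couple else s + 10**6)
        up.append(simulate_population(theta + h, rng=r1))
        down.append(simulate_population(theta - h, rng=r2))
    diffs = (np.array(up) - np.array(down)) / (2 * h)
    return diffs.mean(), diffs.std(ddof=1) / np.sqrt(samples)

for couple in (False, True):
    est, se = fd_sensitivity(0.3, couple=couple)
    print(f"coupled={couple}: sensitivity estimate {est:.2f} +/- {se:.2f}")
```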


Using Deep Q-Learning to Control Optimization Hyperparameters

We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actio...
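A hedged sketch of the general setup this excerpt describes, with illustrative choices of state, actions and reward that are not the paper's: a small DQN with experience replay decides at every step whether to halve, keep, or double the learning rate of gradient descent on a toy quadratic objective, and is trained on the log-decrease of the loss as the reward.

```python
# Hedged sketch (state, actions and reward are illustrative assumptions, not
# the paper's design): a DQN with experience replay controls the learning rate
# of gradient descent on f(x) = 0.5 * x^T A x.
import random
from collections import deque

import torch
import torch.nn as nn

ACTIONS = [0.5, 1.0, 2.0]            # multiply the current learning rate by one of these

def make_state(grad_norm, loss, lr):
    """Illustrative state: log-scaled gradient norm, loss value and current lr."""
    return torch.log(torch.tensor([grad_norm, loss, lr]) + 1e-8).float()

qnet = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
optim_q = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=5000)          # experience replay buffer
gamma, eps = 0.95, 0.2

for episode in range(200):
    # Inner problem: minimize a poorly conditioned quadratic from a random start.
    A = torch.diag(torch.tensor([1.0, 10.0]))
    x = torch.randn(2)
    lr = 0.01
    loss = 0.5 * x @ A @ x
    for step in range(30):
        grad = A @ x
        state = make_state(grad.norm().item(), loss.item(), lr)

        # Epsilon-greedy choice among the q-values of the three actions.
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                a = int(qnet(state).argmax())

        lr = float(min(max(lr * ACTIONS[a], 1e-5), 1.0))
        x = x - lr * grad
        new_loss = 0.5 * x @ A @ x

        # Reward: log-decrease of the objective after applying the chosen lr.
        reward = float(torch.log(loss + 1e-8) - torch.log(new_loss + 1e-8))
        next_state = make_state((A @ x).norm().item(), new_loss.item(), lr)
        replay.append((state, a, reward, next_state))
        loss = new_loss

        # One DQN update on a minibatch sampled from the replay buffer.
        if len(replay) >= 64:
            batch = random.sample(list(replay), 64)
            s = torch.stack([b[0] for b in batch])
            acts = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            q = qnet(s).gather(1, acts.view(-1, 1)).squeeze(1)
            with torch.no_grad():
                target = r + gamma * qnet(s2).max(dim=1).values
            td_loss = nn.functional.mse_loss(q, target)
            optim_q.zero_grad()
            td_loss.backward()
            optim_q.step()
```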


Stealing Hyperparameters in Machine Learning

Hyperparameters are critical in machine learning, as different hyperparameters often result in models with significantly different performance. Hyperparameters may be deemed confidential because of their commercial value and the confidentiality of the proprietary algorithms that the learner uses to learn them. In this work, we propose attacks on stealing the hyperparameters that are learnt by a...
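The short sketch below illustrates the core observation behind this line of attack for the simple case of ridge regression (it is not the paper's full method): because the victim's parameters minimize a known objective, the gradient at those parameters is approximately zero, and that stationarity condition can be solved for the secret regularization coefficient.

```python
# Hedged sketch of the core idea (illustrative, not the paper's full attack):
# recover the regularization hyperparameter of ridge regression from the
# learned weights, using the stationarity of the training objective
# ||Xw - y||^2 + lam * ||w||^2 at the minimizer.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

# Victim trains ridge regression with a secret lam (closed-form solution).
lam_secret = 0.7
w_learned = np.linalg.solve(X.T @ X + lam_secret * np.eye(5), X.T @ y)

# Attacker knows X, y, the objective family and w_learned, but not lam.
# Stationarity: X^T (X w - y) + lam * w = 0  =>  lam * w = -X^T (X w - y).
residual_grad = X.T @ (X @ w_learned - y)
lam_estimate = -(w_learned @ residual_grad) / (w_learned @ w_learned)

print(f"secret lambda = {lam_secret}, recovered lambda = {lam_estimate:.4f}")
```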


Goal-oriented Analysis of Regulations

This paper explains how goal-oriented requirements engineering can be transposed into regulation modelling. It also motivates why this way of modelling regulations is worthwhile for people responsible for preparing regulations. In addition, the paper recounts how the approach has been applied to model ICAO Security Regulation for Civil Aviation in the context of the


Using clinical information in goal-oriented learning.

We have proposed an extension to the Q-learning algorithm that incorporates the existing clinical expertise into the trial-and-error process of acquiring an appropriate administration strategy of rHuEPO to patients with anemia due to ESRD. The specific modification lies in multiple updates of the Q-values for several dose/response combinations during a single learning event. This in turn decrea...
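A hedged sketch of the mechanism this excerpt describes, with synthetic states, doses and rewards and an illustrative "nearby doses behave similarly" rule standing in for the paper's clinical knowledge: a single observed transition triggers attenuated Q-value updates for several dose levels at once.

```python
# Hedged sketch of the mechanism (not the paper's exact clinical rule):
# tabular Q-learning where one observed transition also updates the Q-values
# of neighbouring dose levels, under the illustrative assumption that nearby
# doses produce similar responses. States, doses and rewards are placeholders.
import numpy as np

n_states, n_doses = 5, 4          # e.g. discretized patient state x dose level
Q = np.zeros((n_states, n_doses))
alpha, gamma = 0.1, 0.9

def q_update(state, dose, reward, next_state, neighbour_discount=0.5):
    """Standard Q-learning update, plus attenuated updates for nearby doses."""
    for d in range(n_doses):
        # Weight 1 for the administered dose, smaller for neighbouring doses,
        # zero beyond a distance of 1 (the "clinical similarity" assumption).
        distance = abs(d - dose)
        if distance > 1:
            continue
        weight = 1.0 if distance == 0 else neighbour_discount
        td_target = reward + gamma * Q[next_state].max()
        Q[state, d] += alpha * weight * (td_target - Q[state, d])

# Toy usage: one synthetic learning event updates three Q-entries at once.
q_update(state=2, dose=1, reward=0.8, next_state=3)
print(Q[2])   # doses 0, 1, 2 were updated; dose 3 untouched
```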



Journal

Journal title: Journal of Scientific Computing

Year: 2023

ISSN: 1573-7691, 0885-7474

DOI: https://doi.org/10.1007/s10915-022-02083-4